ECE5725 Final Project

Lidar Robot

Zihao Xue (zx278), Iman Nandi (isn7)

5/19/2020

Get Started

Project information

Objective

Project Objective

The objective of this project was to showcase a novel and inexpensive way of using LiDAR technology in a low-power, portable embedded system. With the LiDAR mounted on a robot that navigates autonomously without colliding with the obstacles it encounters, a map of a room or multiple rooms can be built automatically. These tasks are accomplished without any additional sensor feedback and can aid in visualizing what indoor surroundings look like.

Introduction

Project Introduction

For our final project, we built a LiDAR robot that maps a room while avoiding obstacles through autonomous navigation. A Neato XV-11 LiDAR with a 360-degree field of view was used to sample distance data in the form of 90 packets (4 degrees of data per packet) over a serial interface. An Arduino Pro Micro controller retrieves the serial data from the Neato LiDAR’s 4-pin JST serial connector (Rx, Tx, Vcc, Gnd) and forwards it over a microUSB-to-USB connection to a Raspberry Pi (RPi). An RC car kit with 4 motors was driven by the RPi’s hardware pulse-width modulation (PWM) signals, one per side of the car, to navigate around a room and avoid obstacles using the sampled LiDAR data. A visual map of the room was created using a simultaneous localization and mapping (SLAM) algorithm.

All source code was developed in Python 2.7, using multiple processes and threads to handle several tasks simultaneously. A Raspberry Pi camera was used to capture the robot’s perspective of the room as it navigates. A portable battery pack powered the RPi, and a separate battery supply powered the LiDAR so that it was not driven directly from the RPi, keeping its rotation reliable.

  • Navigates and avoids obstacles
  • Maps the nearby surroundings
  • Records a perspective view of the surroundings and recognizes traffic signs

Design

Some insights about Design

Hardware

The hardware design consists of a Raspberry Pi, a Neato XV-11 LiDAR, an XV LiDAR Controller with a built-in Arduino Pro Micro, a 3D-printed XV LiDAR mount for the LiDAR and LiDAR controller, a 3.3 V voltage regulator for delivering a steady voltage to the LiDAR, and a Pi Camera Module for recording a robot-perspective view and recognizing traffic signs, as well as an RC car kit with 2 batteries for power, 2 L298N motor drivers driving 4 motors connected to 4 wheels, and a chassis. The LiDAR controller converts the 4-pin serial interface (Rx, Tx, Vcc, Gnd) from a JST 4-pin connection to a USB connection to the Raspberry Pi, which controls and receives data from the LiDAR, as opposed to a universal asynchronous receiver-transmitter (UART) interface through the general-purpose input/output (GPIO) pins.
The batteries powering the motor drivers for the RC car kit also deliver fixed power to the LiDAR so that it rotates reliably at a steady speed of approximately 285 RPM; the recommended maximum speed for sampling data quickly but accurately is 300 RPM, and the actual maximum speed is 349 RPM. While the LiDAR controller does allow power to be delivered from the Raspberry Pi through the same USB connection that carries the serial interface, this pulls a large amount of current from the Raspberry Pi while still not maintaining a consistent LiDAR rotation speed. The LiDAR samples 360 degrees of data on every rotation in the form of 90 packets that hold 4 degrees of data each. Using the Pi Camera and simple digital signal processing (DSP) techniques, three different traffic signs can be recognized so that the robot can react by turning left, turning right, or stopping, but this is not a finalized feature for real-time applications.

Robot Overview

The general layout of our LiDAR robot. The battery in the front powers the Raspberry Pi

Robot Layout

The top view of our LiDAR robot. The battery powers the LiDAR and motor drivers

Motor Drivers with Prototyping Board

The prototyping board connects the motor drivers to the Raspberry Pi

Second Layer Overview

The second layer of our LiDAR robot. The motor drivers connect to the Raspberry Pi

Software

Each serial packet the RPi receives from the LiDAR is 22 bytes: a start value that should always be 0xFA, an index that indicates which of the 90 packets (each holding 4 degrees of distance data) the current packet is, the current speed of the LiDAR split into speed_L and speed_H, four 4-byte groups of data primarily containing distance and signal strength/quality, and a checksum split into checksum_L and checksum_H. To read the serial packets effectively, we implement a state machine in lidarreader.py with a state for confirming that the serial data’s start value is in fact 0xFA, a next state for identifying which angles a packet corresponds to by offsetting the index by 0xA0 (packet indices are expected to range from 0xA0 to 0xF9), and a final state for retrieving the distance, quality, and checksum data from a clean packet. A checksum is computed over the packet data and compared to the received checksum value before the distance data is finally used for avoiding obstacles.
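To make the packet handling concrete, the sketch below shows what such a state machine can look like. It is not the actual lidarreader.py code: the function names are hypothetical, read_byte() stands in for a call that returns one byte from the serial port as an integer, and the byte layout and checksum follow the commonly documented XV-11 format.

# Hypothetical sketch of the packet parsing described above.
# Packet layout (22 bytes): 0xFA, index (0xA0..0xF9), speed_L, speed_H,
# four 4-byte distance/quality groups, checksum_L, checksum_H.
def xv11_checksum(first20):
    # Commonly documented XV-11 checksum over the first 20 packet bytes
    chk32 = 0
    for i in range(10):
        chk32 = (chk32 << 1) + first20[2*i] + (first20[2*i + 1] << 8)
    return ((chk32 & 0x7FFF) + (chk32 >> 15)) & 0x7FFF

def parse_packet(read_byte):
    # State 1: hunt for the 0xFA start byte
    while read_byte() != 0xFA:
        pass
    packet = [0xFA] + [read_byte() for _ in range(21)]
    # State 2: the index tells us which 4-degree slice this packet holds
    base_angle = (packet[1] - 0xA0) * 4
    # State 3: verify the checksum before trusting any distance data
    received = packet[20] | (packet[21] << 8)
    if xv11_checksum(packet[:20]) != received:
        return None                                 # corrupted packet, discard
    readings = []
    for i in range(4):
        b = packet[4 + 4*i : 8 + 4*i]
        distance_mm = b[0] | ((b[1] & 0x3F) << 8)   # 14-bit distance in mm
        invalid = bool(b[1] & 0x80)                 # sensor's "invalid" flag
        quality = b[2] | (b[3] << 8)                # signal strength
        readings.append((base_angle + i, distance_mm, quality, invalid))
    return readings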
Since the LiDAR covers 360 degrees, we can retrieve distances at all of these angles, which is a much larger data set for avoiding obstacles than an infrared (IR) sensor provides. We then use all the data coming from the “front side” to help navigate the robot away from obstacles. We chose to treat the side where the LiDAR reports a perceived 180° angle as the “front side”, since it proved easier in software to analyze indexed data across a continuous range of angles. A left container and right container vector in the lidarreader.py script record distance values (in mm) received from serial packets for the two quadrants of angle data on the “front side” of the robot. A method further ensures the integrity of the distance data by checking that the quality value for a distance is not 0 and that the distance is greater than 150 mm, since 15 cm is the minimum distance the LiDAR can detect.
Figure: Lidar Container Diagram
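The snippet below sketches the validation and container bookkeeping just described. The angle ranges (90° to 179° for the left container, 180° to 269° for the right) and the function name are assumptions for illustration; the actual ranges and names in lidarreader.py may differ.

MIN_DISTANCE_MM = 150  # the XV-11 cannot reliably measure closer than 15 cm

def store_reading(left_container, right_container, angle, distance_mm, quality):
    # Reject readings the sensor itself marks as unreliable
    if quality == 0 or distance_mm <= MIN_DISTANCE_MM:
        return
    # The perceived 180-degree angle is treated as the robot's front, so the
    # two quadrants on either side of it form the "front side" containers
    if 90 <= angle < 180:
        left_container[angle - 90] = distance_mm
    elif 180 <= angle < 270:
        right_container[angle - 180] = distance_mm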
The containers’ data help find the minimum distance on each side of the front of the LiDAR so that the robot can intelligently navigate away from obstacles. Based on the container data and distance data from different angles, we used hardware PWM, generated with the Python pigpio library in motordriver.py, to drive the motor drivers for moving forwards, moving backwards, rotating left, and rotating right. A speed lookup table is used to navigate at different speeds according to the closest angle data retrieved. This proved useful for making more intelligent robot rotations based on how close the sampled distance data was, as well as for saving power by avoiding constant turns at a high PWM duty cycle.
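As an illustration of this drive scheme, here is a minimal hardware-PWM sketch with pigpio. The GPIO pin numbers, PWM frequency, and lookup-table values are placeholders rather than the tuned values in motordriver.py, and only forward motion and one axial turn are shown.

import pigpio

# Placeholder pin assignments: GPIO 12 and 13 are hardware-PWM-capable pins
LEFT_PWM, RIGHT_PWM = 12, 13
LEFT_IN1, LEFT_IN2 = 5, 6       # L298N direction inputs, left-side driver
RIGHT_IN1, RIGHT_IN2 = 20, 21   # L298N direction inputs, right-side driver
PWM_FREQ = 1000                 # Hz

# Placeholder speed lookup table (pigpio duty cycles range from 0 to 1,000,000);
# motordriver.py indexes a similar table by where the closest obstacle was
# sampled, so a high duty cycle is only used when a sharp turn is needed
SPEED_TABLE = [400000, 550000, 700000, 850000]

pi = pigpio.pi()

def drive(left_duty, right_duty, left_forward, right_forward):
    pi.write(LEFT_IN1, 1 if left_forward else 0)
    pi.write(LEFT_IN2, 0 if left_forward else 1)
    pi.write(RIGHT_IN1, 1 if right_forward else 0)
    pi.write(RIGHT_IN2, 0 if right_forward else 1)
    pi.hardware_PWM(LEFT_PWM, PWM_FREQ, left_duty)
    pi.hardware_PWM(RIGHT_PWM, PWM_FREQ, right_duty)

def forward(speed_index=0):
    duty = SPEED_TABLE[min(speed_index, len(SPEED_TABLE) - 1)]
    drive(duty, duty, True, True)

def turn_left(speed_index=0):
    # Axial turn: left wheels backward, right wheels forward
    duty = SPEED_TABLE[min(speed_index, len(SPEED_TABLE) - 1)]
    drive(duty, duty, False, True)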
For mapping the LiDAR data while avoiding obstacles, we used the SLAM algorithm from the BreezySLAM Python package; SLAM constructs a map of unknown surroundings while simultaneously keeping track of the position of the navigating robot. The BreezySLAM package is installed on the Raspberry Pi for the built-in SLAM algorithm, along with the PyRoboViz package for visual mapping, using the distutils method of installing open-source packages locally. BreezySLAM provides a class called RMHC_SLAM that implements random-mutation hill-climbing search from the start position to keep track of the robot’s position while updating the map. It is highly efficient for real-time applications since its backend uses Python C extensions, making its Python execution as quick as its available C++ counterpart. Serial data from lidarreader.py is passed as an input parameter to the update() method of an RMHC_SLAM instance. The map is drawn on a matplotlib figure in real time as the robot moves, using vector data collected from the SLAM computation.
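The core of this mapping loop reduces to a few BreezySLAM calls, sketched below with a dummy scan. In the real robot.py (see the Code Appendix) the scan comes from lidarreader.py, and the pose and map bytes are handed to PyRoboViz for display.

from breezyslam.algorithms import RMHC_SLAM
from breezyslam.sensors import XVLidar

MAP_SIZE_PIXELS, MAP_SIZE_METERS = 500, 10

slam = RMHC_SLAM(XVLidar(), MAP_SIZE_PIXELS, MAP_SIZE_METERS)
mapbytes = bytearray(MAP_SIZE_PIXELS * MAP_SIZE_PIXELS)

scan_mm = [1000] * 360          # dummy scan: one distance (mm) per degree
slam.update(scan_mm)            # RMHC search refines the pose estimate
x_mm, y_mm, theta_degrees = slam.getpos()
slam.getmap(mapbytes)           # grayscale occupancy map, one byte per pixel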
Using the Pi Camera Module, video data is recorded with OpenCV while obstacle detection and mapping execute. Traffic signs for turning left, turning right, and stopping were preprocessed as template images so that these signs could be identified with simple DSP image-processing techniques, allowing the robot to react to them accordingly. This served as an extra feature that could be useful when thinking of the project as an overall robot package.
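A plausible minimal version of this template matching with OpenCV is sketched below. The template file names, threshold, and sign ids are assumptions; the real camera code also records video and runs one detection thread per sign.

import cv2

# Hypothetical grayscale template images prepared ahead of time
TEMPLATES = {1: cv2.imread('stop_sign.png', 0),
             2: cv2.imread('turn_right_sign.png', 0),
             3: cv2.imread('turn_left_sign.png', 0)}
MATCH_THRESHOLD = 0.7  # assumed tuning value

def detect_sign(frame_bgr):
    # Return the id of the best-matching traffic sign in a frame, or None
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    best_id, best_score = None, MATCH_THRESHOLD
    for sign_id, template in TEMPLATES.items():
        scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, max_score, _, _ = cv2.minMaxLoc(scores)
        if max_score > best_score:
            best_id, best_score = sign_id, max_score
    return best_id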
To manage all of these tasks in a real-time environment and organize them effectively, we used the Python threading and multiprocessing libraries and divided tasks into different classes for our main script robot.py, which uses the Robot class. The final version of the project runs two processes: a main process constructs the SLAM map with two child threads for reading LiDAR serial data and navigating the robot, and a second process records with the Pi Camera using three threads, one for detecting each of the three traffic signs. A message queue communicates recognized traffic signs from the camera process to the main process so that the robot can navigate accordingly and confirm that it updated its movement. To view the matplotlib mapping data, as well as the camera recording and image-processing output, we used a Virtual Network Computing (VNC) connection to the Raspberry Pi desktop for a visual display during execution. All Python source code was written for Python 2.7.
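The inter-process communication boils down to the stand-alone example below: a stand-in for the camera process pushes sign ids onto a multiprocessing Queue, and the main process polls the queue the same way navigate() does in the Code Appendix. The timing and the hard-coded sign id are placeholders.

import time
from multiprocessing import Process, Queue, Value

def camera_process(running, messages):
    # Stand-in for camera.recording(): pretend a "stop" sign is seen every 2 s
    while running.value:
        time.sleep(2)
        messages.put(1)            # 1 = stop, 2 = turn right, 3 = turn left

if __name__ == '__main__':
    running = Value('b', True)
    messages = Queue()
    worker = Process(target=camera_process, args=(running, messages))
    worker.start()
    for _ in range(3):             # the main process polls, as navigate() does
        time.sleep(2.5)
        if not messages.empty():
            print('sign id from camera process: %d' % messages.get())
    running.value = False          # signal the child process to finish
    worker.join()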
Figure: Flow Diagram of Threading and Multiprocessing
Figure: Software Object-Oriented Design Principle

Testing

Testing Strategy

When we first received the LiDAR and the LiDAR controller, initial testing checked that serial data was being received through a USB port on a laptop computer. To do this, the Arduino integrated development environment (IDE) was configured for the LiDAR controller’s Arduino Pro Micro, and the serial monitor was used to read data using the protocol of commands that the LiDAR understands for serial transmission. The sample data did not initially look very promising: the LiDAR’s rotation speed was inconsistent and fluctuated when powered from the RPi, indicating that the LiDAR was pulling more current than the RPi could reliably provide. This is why we transitioned to a separate power supply for the LiDAR, to avoid straining the RPi’s power and to obtain more reliable serial data.
For hardware PWM, we used an existing robot with servos built during the ECE 5725 course labs to learn how to drive the servos with hardware PWM from the RPi, using the pins connected to the RPi’s PWM peripheral. Hardware PWM with the pigpio library was essential for better real-time response when avoiding obstacles, since the alternative, software PWM with the RPi.GPIO library, has more latency. Given our existing experience with software PWM, hardware PWM was relatively intuitive, so we quickly translated our initial test into the motordriver.py script that drives the 2 motor drivers controlling the RC car’s motors.
In terms of robot movement, we initially used a “coasting” method of turning with all wheels moving forward, where the left wheels rotate with a higher PWM duty cycle than the right wheels, or vice versa. We observed that this caused the robot to collide with an obstacle while trying to rotate away from it, and making it work would have required much more complex movement logic. An easy solution was to perform axial turns, where the wheels on one side move forward while the other side moves backward, turning the robot about its vertical axis. This prevented the body of the robot from getting in the way while avoiding obstacles and made obstacle avoidance much more reliable. In general, the PWM parameters for movement needed a lot of tuning to compensate for the floor surface being tested on and to make all movements as smooth as possible, which also aided the performance of the SLAM mapping functionality.
For serial data, we observed during testing that there were occasional brief periods when a data sample was not received on time or was missampled, which could let the robot get closer to an obstacle than the LiDAR’s minimum detection distance of 15 cm. As explained in the Software section, we went to great lengths to ensure that the sample data being used was reliable. However, in case data that passed through our state machine was still inaccurate, or data arrived with intermittent latency, we compensated whenever the robot got too close to an obstacle on its front, left, or right side by having it rotate or back up before continuing normal operation.
For mapping data, we quickly observed that black and reflective objects were not easy for the LiDAR to detect, a drawback we identified when constructing a map. Since a primary purpose of this project was to focus on what the LiDAR could accomplish on its own, we did not add additional sensors to the robot. When observing the tracking arrow, we noticed that it was oriented in the wrong direction because of our design choice, made to simplify software indexing, of treating the back side of the LiDAR (the 180° angle) as the front of the robot. This was quickly fixed by re-engineering the BreezySLAM package internals as needed so that the arrow points the right way and rotates in the same direction as the robot’s movement. We also observed a map-shifting behavior in BreezySLAM’s matplotlib mapping, but only when the robot rotated to avoid obstacles. While this was mitigated by slowing the turns down and stopping for some time before continuing movement, the improvement was not significant, and there was a lower limit to how slow the turns could be before the floor texture prevented the robot from turning at all. Ways to potentially solve this completely are discussed in the Future Work section.
Finally, as our main script uses threading to perform the project’s essential tasks together in real time, we observed that threads were not fully concurrent when using Python’s threading package (Python’s global interpreter lock allows only one thread to execute Python bytecode at a time), so tasks would not run as quickly as needed when competing for resources in the main process. To address this, short sleep calls with the Python time package were added to the threads to prevent any given thread from stalling the others during execution.

Results

Robot Test Results

We set out to perform obstacle detection and mapping using an inexpensive LiDAR and the RPi as an embedded system, and the LiDAR robot was able to do both of these tasks quite effectively; we also went the extra mile of adding a camera and managing its process execution and image processing alongside the main robot tasks. Given the tuned PWM parameters, data validation, data error checking, and intelligent movement logic, the robot was able to avoid obstacles around it for an extended period of time in the house we tested in, without detrimental collisions and without supervision, given a relatively smooth floor surface to move on. The mapping data produced by the SLAM algorithm was visually very clear and updated quickly enough for indoor mapping applications when the LiDAR was static or moving on the robot in a straight line. However, there is definitely room for improvement in rotational movements of the robot because of the map-shifting behavior we encountered during that part of execution. Given that the LiDAR’s maximum detection distance is 3 meters at any given angle, the robot navigation thread is not even required for a relatively small room in order to map most of the surrounding boundaries accurately, with the exception of black and reflective objects. We were even able to identify ourselves on the maps as black points recognized as a boundary. The Pi Camera recording and image processing added an extra touch to the project design, and it further emphasized that the RPi, despite its system limitations, could handle this project’s multiple tasks portably as an embedded system operating in a relatively real-time application, given an appropriate power supply for reliable performance.
Figure: SLAM result when the robot remains still
Figure: SLAM result when the robot is running for a while
Figure: SLAM result when the robot starts running
Figure: The robot recognizes the "turn left" traffic sign
Figure: The robot recognizes the "turn right" traffic sign
Figure: The robot recognizes the "stop" traffic sign

Conclusion

Project Conclusion

Our project met the goals we set for it. An embedded system in a real-time environment was created using an inexpensive LiDAR attached to a robot that navigates around obstacles while mapping its surroundings, and a camera module was added to record the robot’s perspective, showcasing a potential next step in which the LiDAR robot could be combined with image processing and other sensors to serve a variety of applications. Considering the power distribution in the system was critical to the robot and LiDAR performance, since relying on the Raspberry Pi alone proved infeasible for reliable power. Through serial data analysis and testing, it became clear that, given the inexpensive LiDAR we chose, it was not practical to expect the raw serial data to be perfect or even to arrive on time. To solve this, several layers of software logic, including a state machine with checksum verification, error checking of the cleanly processed data, and dynamic robot movement capabilities, were essential to ensure that the robot could continuously avoid obstacles without significant latency. Since performing tasks sequentially did not make sense for this application, proper software class organization and threading mechanisms allowed the BreezySLAM package to be highly effective at mapping data while avoiding obstacles, and the package’s mapping abilities can be investigated and re-engineered further for future improvements. As LiDAR, in its more expensive and higher-performance forms, is a contending sensor in state-of-the-art self-driving cars, this project effectively showcased that an inexpensive LiDAR and a Raspberry Pi can be combined with other inexpensive equipment to accomplish several simple technical applications.

Future Work

Work in the future

There are several aspects of this project that can be improved. In terms of mapping while navigating a room, the map-shifting behavior during robot turns can be mitigated by tuning the PWM parameters for obstacle-avoidance rotations, as well as modifying the hardware configuration, so that lower PWM duty cycles can be used without the robot getting stuck on rough areas of a floor or lacking the rotation speed to turn as expected. The localization algorithm and dynamics model in the BreezySLAM package would have to be investigated further to see how they perform on a Raspberry Pi and whether they need to be re-engineered or optimized. Regarding the localization ability of the package, BreezySLAM optionally takes odometry data as an extra parameter to its SLAM algorithm’s update() method to stabilize mapping as the robot navigates. However, using this parameter would require additional hardware such as an accelerometer or a similar sensor measuring the velocity of the robot. Instead of providing odometry data, the robot that the LiDAR is mounted on could also be configured as a holonomic drive that performs only translational movement and avoids rotational movement, which would prevent the map from shifting since the robot would mostly maintain its orientation about its vertical axis. To correct distortions related to black and reflective objects that the LiDAR cannot detect well, a separate sensor or an improved mapping algorithm could be used to identify those areas, either using the separate sensor’s feedback to construct the missing boundaries or using a smoothing method to approximate them.
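As a sketch of what the odometry path could look like, assuming the optional pose-change tuple (displacement in mm, heading change in degrees, elapsed seconds) documented for BreezySLAM’s update() method, with placeholder values standing in for real encoder or accelerometer measurements:

from breezyslam.algorithms import RMHC_SLAM
from breezyslam.sensors import XVLidar

slam = RMHC_SLAM(XVLidar(), 500, 10)
scan_mm = [1000] * 360          # placeholder scan, one reading per degree

# Placeholder odometry: (mm moved since last scan, degrees turned, seconds)
pose_change = (12.0, 1.5, 0.1)
slam.update(scan_mm, pose_change)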
In terms of power, a better portable battery pack with 2-2.5 A of supply current instead of 1 A could be used to power the RPi without a huge weight tradeoff on the RC car chassis. Power was mainly an issue in the final version of our software design, which used multiprocessing to run the Pi Camera in a separate process with three image-processing threads while simultaneously running the project’s main process with a main thread for mapping and 2 child threads for receiving serial data and detecting obstacles. It was definitely important in this project not to strain the RPi’s power budget by supplying the LiDAR from it, which is why we used a separate 2-battery supply, since the RPi was already having a tough time receiving steady power from the portable battery pack we used.
In terms of image processing, the Pi Camera’s reaction to traffic signs and other image data could be improved as an additional feature, given an application need. Since only simple DSP techniques were used for image processing, the robot had a tough time processing images while moving around during navigation, and it sometimes mistook similarly shaped structures around the room for the traffic signs it recognized. This could be improved with an image classifier optimized for the Raspberry Pi that performs better in real-time applications.

Work Distribution

Work Distribution

Figure: Team Photo
Zihao Xue
Iman Nandi
Specific Task | Team Member(s)
Hardware PWM Initial Testing and Implementation on Lab 3 robot | Iman Nandi
Initial Serial Tests on Arduino Software | Iman Nandi
Create motordriver.py to control L298N motor drivers | Zihao Xue
SLAM Algorithm Package Configuration | Iman Nandi
Serial Data Analysis and Conversion in Python | Zihao Xue, Iman Nandi
Serial Data Performance and General Software Debugging | Zihao Xue, Iman Nandi
Create robot.py, lidarreader.py and design interfaces for SLAM Algorithm Package | Zihao Xue
Threading, Multiprocessing, and Software Class Organization | Zihao Xue
Camera Programming | Zihao Xue
Final Robot Hardware Setup | Zihao Xue, Iman Nandi
PWM Parameter Tuning and Camera Testing | Zihao Xue, Iman Nandi

Budget

Project Budget

project budget

References

References

Code Appendix

Code Appendix

'''
robot.py - Python code for final project demo
Author: Zihao Xue

'''
import numpy as np
import threading
import time
from lidarreader import lidarReader
from motordriver import MotorController
from roboviz import MapVisualizer
from breezyslam.algorithms import RMHC_SLAM
from breezyslam.sensors import XVLidar as LaserModel
from multiprocessing import Process, Queue, Value, Lock
from camera import Camera

MAP_SIZE_PIXELS         = 500
MAP_SIZE_METERS         = 10

class Robot:
  def __init__(self, messagequeue):
    # Connect to Lidar unit
    # when initializing the robot object, we first open lidar and start
    # reading data from the lidar
    self.lidar = lidarReader('/dev/mylidar', 115200)
    self.mover = MotorController()
    
    # Create an RMHC SLAM object with a laser model and optional robot model
    self.slam = RMHC_SLAM(LaserModel(), MAP_SIZE_PIXELS, MAP_SIZE_METERS)
    # Set up a SLAM display
    self.viz = MapVisualizer(MAP_SIZE_PIXELS, MAP_SIZE_METERS, 'SLAM')
    # Initialize empty map
    self.mapbytes = bytearray(MAP_SIZE_PIXELS * MAP_SIZE_PIXELS)
    self.messages = messagequeue
    
    # All the functions should run in the same process so that the variable can be shared
    self.thread = threading.Thread(target=self.navigate, args=())
    # Daemon threads are killed automatically when the main thread exits,
    # regardless of whether their work is complete, so the main thread never
    # blocks on this navigation thread at shutdown.
    self.thread.daemon = True
    self.navigating = True
    self.thread.start()
    
  # Construct Map should always run in the main process
  def constructmap(self):
    while True:
      time.sleep(0.0001)
      # Update SLAM with current Lidar scan, using first element of (scan, quality) pairs
      self.slam.update(self.lidar.getdata())
      #print(self.lidar.getdata())
      # Get current robot position
      x, y, theta = self.slam.getpos()
      # Get current map bytes as grayscale
      self.slam.getmap(self.mapbytes)
      # Display map and robot pose, exiting gracefully if user closes it
      if not self.viz.display(x/1000., y/1000., theta, self.mapbytes):
        break
    
  def navigate(self):
    while self.navigating:
      time.sleep(0.01) # (yield) allow time for reading data from the serial port
      if(self.messages.empty()): # no traffic sign detected by the camera process
      #if(True): # uncomment to ignore camera detections during testing
        front_too_close, left_too_close, right_too_close = False, False, False
        if(self.lidar.angle_180 < 400):
          front_too_close = True
        left_angle = np.argmin(self.lidar.left_container)
        right_angle = np.argmin(self.lidar.right_container)
        if(self.lidar.left_container[left_angle] < 250):
          left_too_close = True
        if(self.lidar.right_container[right_angle] < 250):
          right_too_close = True
        if(front_too_close and left_too_close and right_too_close):
          self.mover.backward()
        elif(front_too_close):
          if(self.lidar.angle_270 > self.lidar.angle_90):
            self.mover.turn_left()
          else:
            self.mover.turn_right()
        elif(left_too_close or right_too_close):
          if(left_too_close):
            self.mover.turn_right(left_angle)
          else:
            self.mover.turn_left(right_angle)
        else:
          self.mover.forward()
        #print(self.lidar.left_container)
        #print(front_too_close, left_too_close, right_too_close, self.lidar.angle_180)
      else:
        sign = self.messages.get() # get the detection id
        if(sign == 1):
          self.mover.stop()
        elif(sign == 2):
          self.mover.turn_right()
        else:
          self.mover.turn_left()  
      

if __name__ == "__main__":
  running = Value('b', True)
  # multiprocessing communication channel between processes
  messagequeue = Queue()
  # multiprocessing required classes
  camera = Camera()
  robot = Robot(messagequeue)
  # processes
  pro1 = Process(target=camera.recording, args=(running, messagequeue,))
  pro1.start()
  #pro2.start()
  robot.constructmap()
  # Signal the camera process to stop, then wait for it to exit safely
  running.value = False
  pro1.join()
  #pro2.join()
  robot.navigating = False
  robot.thread.join()
  robot.mover.shutdown()
  print("safely exit")
            
Complete source code on GitHub: Demo code with more development coming in the future and development code